

Globally Optimal Learning for Structured Elliptical Losses

Neural Information Processing Systems

Heavy-tailed and contaminated data are common in many applications of machine learning. A standard technique for handling regression tasks that involve such data is to use robust losses, e.g., the popular Huber's loss. In structured problems, however, where there are multiple labels and structural constraints on the labels are imposed (or learned), robust optimization is challenging, and more often than not the loss used is simply the negative log-likelihood of a Gaussian Markov random field.
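To illustrate why a robust loss like Huber's limits the influence of heavy-tailed outliers, here is a minimal sketch (not from the paper): the loss is quadratic for small residuals, matching squared error, but grows only linearly for large ones, so a single contaminated point cannot dominate the fit.

```python
import numpy as np

def huber_loss(r, delta=1.0):
    """Huber's loss on residuals r: 0.5*r^2 when |r| <= delta,
    and delta*(|r| - 0.5*delta) otherwise (linear tail)."""
    r = np.asarray(r, dtype=float)
    quad = 0.5 * r**2
    lin = delta * (np.abs(r) - 0.5 * delta)
    return np.where(np.abs(r) <= delta, quad, lin)
```

For example, a residual of 3.0 contributes 4.5 under squared error but only 2.5 under Huber's loss with delta = 1, which is what makes the estimator robust to contamination.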


Reviews: Globally Optimal Learning for Structured Elliptical Losses

Neural Information Processing Systems

The technical core (e.g., Lemma 1) builds incrementally on previous work, and based on the related-work discussion it is hard to contextualize the contribution within the existing literature; readers may have important questions here, and a clear explanation is needed. That said, the work is significant in that it provides an optimality proof that leads to a more efficient optimization method for a wide range of robust elliptical losses, including the Gaussian, Generalized Gaussian, and Huber losses.
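The family of elliptical losses the review refers to can be sketched schematically (this is a hypothetical illustration, not the paper's formulation): each loss applies a scalar function g to the squared Mahalanobis distance x^T K x under a structured precision matrix K, plus the usual log-determinant term; choosing g recovers the Gaussian or Huber-type cases.

```python
import numpy as np

def elliptical_nll(x, K, g):
    """Schematic elliptical negative log-likelihood: g(x^T K x) - 0.5*log det K.
    K is a (structured) precision matrix; g determines the tail behavior."""
    d = float(x @ K @ x)                  # squared Mahalanobis distance
    _, logdet = np.linalg.slogdet(K)      # stable log-determinant
    return g(d) - 0.5 * logdet

# Gaussian: quadratic in the distance.
g_gaussian = lambda d: 0.5 * d
# Huber-type: quadratic near zero, linear in sqrt(d) in the tail (c is a threshold).
g_huber = lambda d, c=1.0: 0.5 * d if d <= c**2 else c * np.sqrt(d) - 0.5 * c**2
```

The point of the paper's result, as the review describes it, is that learning K under such losses can be done with global optimality guarantees rather than only locally.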


Reviews: Globally Optimal Learning for Structured Elliptical Losses

Neural Information Processing Systems

All reviewers agreed that this paper makes an interesting contribution to NeurIPS. Please make sure to take the reviewers' comments into consideration for the camera-ready version, in particular the contextualization of the work.



Globally Optimal Learning for Structured Elliptical Losses

Wald, Yoav, Noy, Nofar, Elidan, Gal, Wiesel, Ami

Neural Information Processing Systems

Heavy-tailed and contaminated data are common in many applications of machine learning. A standard technique for handling regression tasks that involve such data is to use robust losses, e.g., the popular Huber's loss. In structured problems, however, where there are multiple labels and structural constraints on the labels are imposed (or learned), robust optimization is challenging, and more often than not the loss used is simply the negative log-likelihood of a Gaussian Markov random field.